Heuristic optimisation of multi-task dynamic architecture neural network (DAN2)

Authors

Abstract

This article proposes a novel method to optimise the Dynamic Architecture Neural Network (DAN2) adapted for the multi-task learning problem. The network adopts a multi-head, serial architecture with DAN2 layers acting as the basic subroutine. Following the dynamic-architecture principle, layers are added consecutively starting from a minimal initial structure. The optimisation is an iterative heuristic scheme that sequentially optimises the shared and task-specific layers until the solver converges to within a small tolerance. Application of the method to simulated datasets has demonstrated the applicability of the algorithm. Results comparable to Artificial Neural Networks (ANNs) have been obtained in terms of accuracy and speed.
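The abstract outlines the training scheme only at a high level: start from a minimal structure, add shared layers one at a time, and alternate between optimising the shared part and the task-specific heads until the solver converges to a small tolerance. The paper's actual DAN2 layer function and solver are not reproduced here, so the sketch below only illustrates that loop structure under stated assumptions: tanh random projections stand in for DAN2 layers, least squares fits the task heads, and a simple accept-if-better perturbation stands in for the heuristic shared-layer update. All names (train_multitask_dan2_sketch, fit_head, and so on) are hypothetical.

```python
import numpy as np

def fit_head(H, y):
    """Least-squares fit of one task-specific output head on representation H."""
    W, *_ = np.linalg.lstsq(np.c_[H, np.ones(len(H))], y, rcond=None)
    return W

def head_loss(H, W, y):
    """Mean-squared error of a fitted head on representation H."""
    return np.mean((np.c_[H, np.ones(len(H))] @ W - y) ** 2)

def train_multitask_dan2_sketch(X, ys, max_layers=8, tol=1e-4, inner_iters=100, seed=0):
    """Grow shared layers one at a time; alternate shared / task-specific updates.

    X  : (n, d) input matrix shared by all tasks.
    ys : list of (n,) target vectors, one per task.
    """
    rng = np.random.default_rng(seed)
    H = X.copy()                               # minimal initial structure: raw inputs
    shared = []

    for _ in range(max_layers):
        # add one new shared layer (random projection as a stand-in for a DAN2 layer)
        W_s = rng.normal(scale=1 / np.sqrt(H.shape[1]), size=(H.shape[1], H.shape[1]))
        prev = np.inf
        for _ in range(inner_iters):
            # task-specific step: refit every head on the current shared output
            H_out = np.tanh(H @ W_s)
            heads = [fit_head(H_out, y) for y in ys]
            loss = np.mean([head_loss(H_out, w, y) for w, y in zip(heads, ys)])
            if abs(prev - loss) < tol:         # converged to within a small tolerance
                break
            prev = loss
            # shared step: accept a random perturbation of the new layer only if it helps
            trial = W_s + rng.normal(scale=1e-2, size=W_s.shape)
            H_trial = np.tanh(H @ trial)
            trial_loss = np.mean([head_loss(H_trial, fit_head(H_trial, y), y) for y in ys])
            if trial_loss < loss:
                W_s = trial
        shared.append(W_s)
        H = np.tanh(H @ W_s)                   # freeze this layer before adding the next

    heads = [fit_head(H, y) for y in ys]       # final task-specific heads on the frozen trunk
    return shared, heads
```

Called with an (n, d) input matrix X and a list of per-task target vectors ys, the sketch returns the grown list of shared-layer weights and one least-squares head per task; each layer's inner loop stops once the multi-task mean-squared error changes by less than tol, mirroring the convergence criterion described in the abstract.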


Similar articles

Dynamic Multi-Task Learning with Convolutional Neural Network

Multi-task learning and deep convolutional neural network (CNN) have been successfully used in various fields. This paper considers the integration of CNN and multi-task learning in a novel way to further improve the performance of multiple related tasks. Existing multi-task CNN models usually empirically combine different tasks into a group which is then trained jointly with a strong assumptio...


MtNet: A Multi-Task Neural Network for Dynamic Malware Classification

In this paper, we propose a new multi-task, deep learning architecture for the binary (i.e. malware versus benign) malware classification task. All models are trained with data extracted from dynamic analysis of malicious and benign files. For the first time, we see improvements using multiple layers in a deep neural network architecture for malware classification. Th...


Generalization Tower Network: A Novel Deep Neural Network Architecture for Multi-Task Learning

Deep learning (DL) advances state-of-the-art reinforcement learning (RL), by incorporating deep neural networks in learning representations from the input to RL. However, the conventional deep neural network architecture is limited in learning representations for multi-task RL (MT-RL), as multiple tasks can refer to different kinds of representations. In this paper, we thus propose a novel deep...


Backwards Neural Network Optimisation

The problem of predicting multivariable process inputs for a given set of process outputs is solved by training a combination of partitioned feedforward backpropagation neural networks. Training of a single multi-layer network produces large unacceptable errors in the predictions due to the absence of monotonic process output functions. However, by configuring a system with parallel partitioned...


Heuristic optimisation

1. The computational complexity of many problems means that optimal solutions are unlikely to be found in reasonable time in larger instances. 2. Problems may be ill-defined or data imprecise, so that an optimal solution based on estimated data will almost certainly not be optimal for the actual data. In such a situation it is preferable to obtain a robust solution that will be near-optimal ove...



Journal

Journal title: Neural Computing and Applications

Year: 2022

ISSN: 0941-0643, 1433-3058

DOI: https://doi.org/10.1007/s00521-022-07851-9